filter bubble



Quantifying the Potential to Escape Filter Bubbles: A Behavior-Aware Measure via Contrastive Simulation

Feng, Difu, Xu, Qianqian, Wang, Zitai, Hua, Cong, Yang, Zhiyong, Huang, Qingming

arXiv.org Artificial Intelligence

Recommendation systems have become crucial to online platforms, shaping user exposure through accurate preference modeling. However, such an exposure strategy can also reinforce users' existing preferences, leading to the notorious phenomenon known as filter bubbles. Given their negative effects, such as group polarization, increasing attention has been paid to finding reasonable measures of filter bubbles. Most existing evaluation metrics, however, simply measure the diversity of user exposure and fail to distinguish between algorithmic preference modeling and actual information confinement. In view of this, we introduce Bubble Escape Potential (BEP), a behavior-aware measure that quantifies how easily users can escape from filter bubbles. Specifically, BEP leverages a contrastive simulation framework that assigns different behavioral tendencies (e.g., positive vs. negative) to synthetic users and compares the induced exposure patterns. This design decouples the effect of filter bubbles from preference modeling, allowing for a more precise diagnosis of bubble severity. We conduct extensive experiments across multiple recommendation models to examine the relationship between predictive accuracy and bubble escape potential across different user groups. To the best of our knowledge, our empirical results are the first to quantitatively validate the dilemma between preference modeling and filter bubbles. Moreover, we observe the counter-intuitive phenomenon that mild random recommendations are ineffective in alleviating filter bubbles, which offers a principled foundation for further work in this direction.
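A minimal sketch of the contrastive-simulation idea behind BEP, written under strong assumptions: the recommender is treated as a black-box function over an interaction history, the two behavioral tendencies are reduced to "seek unseen categories" vs. "stay in seen categories", and category entropy stands in for the paper's exposure measure. This is not the authors' implementation.

import numpy as np

def simulate_exposure(recommender, history, tendency, n_rounds=20, k=10, seed=0):
    # Roll out one synthetic user. tendency=+1: the user prefers items from
    # categories it has not seen yet (escape-seeking); tendency=-1: the user
    # keeps clicking items from already-seen categories (bubble-reinforcing).
    rng = np.random.default_rng(seed)
    history = list(history)                      # list of (item_id, category)
    exposed = []
    for _ in range(n_rounds):
        slate = recommender(history, k)          # assumed: returns k (item_id, category) pairs
        exposed += [c for _, c in slate]
        seen = {c for _, c in history}
        if tendency > 0:
            candidates = [it for it in slate if it[1] not in seen] or slate
        else:
            candidates = [it for it in slate if it[1] in seen] or slate
        history.append(candidates[rng.integers(len(candidates))])
    return exposed

def category_entropy(categories):
    _, counts = np.unique(categories, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log(p)).sum())

def bubble_escape_potential(recommender, history):
    # Gap in exposure diversity between the two contrastive synthetic users:
    # a small gap means even an escape-seeking user stays confined.
    pos = category_entropy(simulate_exposure(recommender, history, +1))
    neg = category_entropy(simulate_exposure(recommender, history, -1))
    return pos - neg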



Avoiding Over-Personalization with Rule-Guided Knowledge Graph Adaptation for LLM Recommendations

Spadea, Fernando, Seneviratne, Oshani

arXiv.org Artificial Intelligence

We present a lightweight neuro-symbolic framework to mitigate over-personalization in LLM-based recommender systems by adapting user-side Knowledge Graphs (KGs) at inference time. Instead of retraining models or relying on opaque heuristics, our method restructures a user's Personalized Knowledge Graph (PKG) to suppress feature co-occurrence patterns that reinforce Personalized Information Environments (PIEs), i.e., algorithmically induced filter bubbles that constrain content diversity. These adapted PKGs are used to construct structured prompts that steer the language model toward more diverse, Out-PIE recommendations while preserving topical relevance. We introduce a family of symbolic adaptation strategies, including soft reweighting, hard inversion, and targeted removal of biased triples, and a client-side learning algorithm that optimizes their application per user. Experiments on a recipe recommendation benchmark show that personalized PKG adaptations significantly increase content novelty while maintaining recommendation quality, outperforming global adaptation and naive prompt-based methods.
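As a rough illustration of the three symbolic adaptation strategies named above (soft reweighting, hard inversion, targeted removal), here is a hypothetical sketch over a weighted-triple PKG; the data layout, function names, and prompt serialization are assumptions, not the authors' API.

def adapt_pkg(triples, biased_objects, strategy="soft", alpha=0.5):
    # triples: list of (subject, relation, object, weight) tuples.
    # Suppress co-occurrence patterns tied to the user's PIE by down-weighting
    # ("soft"), flipping ("invert"), or dropping ("remove") biased triples.
    adapted = []
    for s, r, o, w in triples:
        if o in biased_objects:
            if strategy == "soft":
                adapted.append((s, r, o, w * alpha))
            elif strategy == "invert":
                adapted.append((s, r, o, -w))
            elif strategy == "remove":
                continue
        else:
            adapted.append((s, r, o, w))
    return adapted

def pkg_to_prompt(triples):
    # Serialize the adapted graph into a structured prompt fragment.
    lines = [f"- user {r} {o} (weight {w:+.2f})" for _, r, o, w in triples]
    return "User preference graph:\n" + "\n".join(lines)

# Example: down-rank an over-represented cuisine before prompting the LLM.
pkg = [("user", "likes_cuisine", "italian", 0.9), ("user", "avoids", "cilantro", 0.7)]
prompt = pkg_to_prompt(adapt_pkg(pkg, {"italian"}, strategy="soft"))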


Looking for Fairness in Recommender Systems

Logé, Cécile

arXiv.org Artificial Intelligence

Recommender systems can be found everywhere today, shaping our everyday experience whenever we're consuming content, ordering food, buying groceries online, or even just reading the news. Let's imagine we're in the process of building a recommender system to make content suggestions to users on social media. When thinking about fairness, it becomes clear there are several perspectives to consider: the users asking for tailored suggestions, the content creators hoping for some limelight, and society at large, navigating the repercussions of algorithmic recommendations. A shared fairness concern across all three is the emergence of filter bubbles, a side-effect that takes place when recommender systems are almost "too good", making recommendations so tailored that users become inadvertently confined to a narrow set of opinions/themes and isolated from alternative ideas. From the user's perspective, this is akin to manipulation. From the small content creator's perspective, this is an obstacle preventing them access to a whole range of potential fans. From society's perspective, the potential consequences are far-reaching, influencing collective opinions, social behavior and political decisions. How can our recommender system be fine-tuned to avoid the creation of filter bubbles, and ensure a more inclusive and diverse content landscape? Approaching this problem involves defining one (or more) performance metric to represent diversity, and tweaking our recommender system's performance through the lens of fairness. By incorporating this metric into our evaluation framework, we aim to strike a balance between personalized recommendations and the broader societal goal of fostering rich and varied cultures and points of view.
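One common way to operationalize such a diversity metric, offered here only as an illustrative sketch: the item-embedding representation and the MMR-style re-ranker below are assumptions, not a method proposed in the article.

import numpy as np

def intra_list_diversity(item_vectors):
    # Mean pairwise cosine distance within one recommendation slate;
    # higher values mean the user is exposed to a broader mix of content.
    V = np.asarray(item_vectors, dtype=float)
    V = V / np.linalg.norm(V, axis=1, keepdims=True)
    sims = V @ V.T
    off_diag = sims[~np.eye(len(V), dtype=bool)]
    return float(1.0 - off_diag.mean())

def rerank_for_diversity(candidates, relevance, vectors, k=10, lam=0.3):
    # Greedy relevance/diversity trade-off (MMR-style): each pick balances the
    # model's score against distance from the items already selected.
    V = np.asarray(vectors, dtype=float)
    V = V / np.linalg.norm(V, axis=1, keepdims=True)
    picked, remaining = [], list(range(len(candidates)))
    while remaining and len(picked) < k:
        def gain(i):
            div = 1.0 if not picked else 1.0 - max(float(V[i] @ V[j]) for j in picked)
            return (1 - lam) * relevance[i] + lam * div
        best = max(remaining, key=gain)
        picked.append(best)
        remaining.remove(best)
    return [candidates[i] for i in picked]

The lam parameter is exactly the fairness knob the abstract alludes to: lam=0 reproduces pure personalization, while larger values trade a little relevance for a more varied slate.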


Mitigating Societal Cognitive Overload in the Age of AI: Challenges and Directions

Lahlou, Salem

arXiv.org Artificial Intelligence

Societal cognitive overload, driven by the deluge of information and complexity in the AI age, poses a critical challenge to human well-being and societal resilience. This paper argues that mitigating cognitive overload is not only essential for improving present-day life but also a crucial prerequisite for navigating the potential risks of advanced AI, including existential threats. We examine how AI exacerbates cognitive overload through various mechanisms, including information proliferation, algorithmic manipulation, automation anxieties, deregulation, and the erosion of meaning. The paper reframes the AI safety debate to center on cognitive overload, highlighting its role as a bridge between near-term harms and long-term risks. It concludes by discussing potential institutional adaptations, research directions, and policy considerations that arise from adopting an overload-resilient perspective on human-AI alignment, suggesting pathways for future exploration rather than prescribing definitive solutions. We stand at a precipice. Human societies are increasingly struggling to process the sheer volume and complexity of information in the digital age, a condition dramatically amplified by the rapid proliferation of artificial intelligence (AI). While Toffler (1970) foresaw "future shock" from accelerating change and Eppler & Mengis (2004); Bawden & Robinson (2009) analyzed individual information overload, Byung-Chul Han, in his critique of neoliberalism and technological domination (Han, 2017), argues that contemporary society faces a regime of technological domination that exploits and overwhelms the psyche. This exploitation and overwhelming of the psyche, now dramatically amplified by AI-driven information and complexity, elevates information overload to a systemic crisis: societal cognitive overload.


Simulating Filter Bubble on Short-video Recommender System with Large Language Model Agents

Sukiennik, Nicholas, Wang, Haoyu, Zeng, Zailin, Gao, Chen, Li, Yong

arXiv.org Artificial Intelligence

An increasing reliance on recommender systems has led to concerns about the creation of filter bubbles on social media, especially on short video platforms like TikTok. However, their formation is still not entirely understood due to the complex dynamics between recommendation algorithms and user feedback. In this paper, we aim to shed light on these dynamics using a large language model-based simulation framework. Our work employs real-world short-video data containing rich video content information and detailed user-agents to realistically simulate the recommendation-feedback cycle. Through large-scale simulations, we demonstrate that LLMs can replicate real-world user-recommender interactions, uncovering key mechanisms driving filter bubble formation. We identify critical factors, such as demographic features and category attraction, that exacerbate content homogenization. To mitigate this, we design and test interventions including various cold-start and feedback weighting strategies, showing measurable reductions in filter bubble effects. Our framework enables rapid prototyping of recommendation strategies, offering actionable solutions to enhance content diversity in real-world systems. Furthermore, we analyze how LLM-inherent biases may propagate through recommendations, proposing safeguards to promote equity for vulnerable groups, such as women and low-income populations. By examining the interplay between recommendation and LLM agents, this work advances a deeper understanding of algorithmic bias and provides practical tools to promote inclusive digital spaces.
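The recommendation-feedback cycle described above might be skeletonized as follows; llm_judge is a placeholder for whatever LLM call plays the user agent, and the category-coverage trace is an assumed stand-in for the paper's homogenization measures, not its actual metric.

def simulate_feedback_loop(recommender, user_profile, videos, llm_judge, n_rounds=50, k=5):
    # Minimal recommendation-feedback cycle: recommend a slate, ask an
    # LLM-backed user agent which videos it engages with, feed that back in.
    interactions = []                                 # (video_id, liked) pairs
    diversity_trace = []
    for _ in range(n_rounds):
        slate = recommender(user_profile, interactions, videos, k)
        for video in slate:
            liked = llm_judge(user_profile, video, interactions)  # assumed to return a bool
            interactions.append((video["id"], liked))
        diversity_trace.append(len({v["category"] for v in slate}) / k)
    return interactions, diversity_trace

Interventions of the kind the abstract mentions could be plugged into this loop, for instance by seeding interactions with a diverse cold-start slate or by having the recommender discount feedback from already-dominant categories.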


Comparable Corpora: Opportunities for New Research Directions

Church, Kenneth

arXiv.org Artificial Intelligence

Most conference papers present new results, but this paper will focus more on opportunities for the audience to make their own contributions. This paper is intended to challenge the community to think more broadly about what we can do with comparable corpora. We will start with a review of the history, and then suggest new directions for future research. This was a keynote at BUCC-2025, a workshop associated with Coling-2025.


Interactive Counterfactual Exploration of Algorithmic Harms in Recommender Systems

Ahn, Yongsu, Wolter, Quinn K, Dick, Jonilyn, Dick, Janet, Lin, Yu-Ru

arXiv.org Artificial Intelligence

Recommender systems have become integral to digital experiences, shaping user interactions and preferences across various platforms. Despite their widespread use, these systems often suffer from algorithmic biases that can lead to unfair and unsatisfactory user experiences. This study introduces an interactive tool designed to help users comprehend and explore the impacts of algorithmic harms in recommender systems. By leveraging visualizations, counterfactual explanations, and interactive modules, the tool allows users to investigate how biases such as miscalibration, stereotypes, and filter bubbles affect their recommendations. Informed by in-depth user interviews, this tool benefits both general users and researchers by increasing transparency and offering personalized impact assessments, ultimately fostering a better understanding of algorithmic biases and contributing to more equitable recommendation outcomes. This work provides valuable insights for future research and practical applications in mitigating bias and enhancing fairness in machine learning algorithms.
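Two probes such a tool might expose, sketched under assumptions: the item schema, the total-variation miscalibration measure, and the single-attribute counterfactual below are illustrative, not the tool's actual implementation.

from collections import Counter

def genre_distribution(items):
    counts = Counter(g for item in items for g in item["genres"])
    total = sum(counts.values())
    return {g: c / total for g, c in counts.items()} if total else {}

def miscalibration(history, recommendations):
    # Total variation distance between the genre mix a user actually consumed
    # and the genre mix the recommender serves them.
    p, q = genre_distribution(history), genre_distribution(recommendations)
    return 0.5 * sum(abs(p.get(g, 0.0) - q.get(g, 0.0)) for g in set(p) | set(q))

def counterfactual_delta(recommender, profile, history, attribute, new_value, k=10):
    # "What would I be shown if this one attribute of mine were different?"
    base = recommender(profile, history, k)
    altered = dict(profile, **{attribute: new_value})
    counterfactual = recommender(altered, history, k)
    changed = {item["id"] for item in counterfactual} - {item["id"] for item in base}
    return base, counterfactual, len(changed) / k     # fraction of the slate that flips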


A survey on the impact of AI-based recommenders on human behaviours: methodologies, outcomes and future directions

Pappalardo, Luca, Ferragina, Emanuele, Citraro, Salvatore, Cornacchia, Giuliano, Nanni, Mirco, Rossetti, Giulio, Gezici, Gizem, Giannotti, Fosca, Lalli, Margherita, Gambetta, Daniele, Mauro, Giovanni, Morini, Virginia, Pansanella, Valentina, Pedreschi, Dino

arXiv.org Artificial Intelligence

Recommendation systems and assistants (from now on, recommenders) - algorithms suggesting items or providing solutions based on users' preferences or requests [99, 105, 141, 166] - influence, through online platforms, most actions of our day-to-day lives. For example, recommendations on social media suggest new social connections, those on online retail platforms guide users' product choices, navigation services offer routes to desired destinations, and generative AI platforms produce content based on users' requests. Unlike other AI tools, such as medical diagnostic support systems, robotic vision systems, or autonomous driving, which assist in specific tasks or functions, recommenders are ubiquitous in online platforms, shaping our decisions and interactions instantly and profoundly. The influence recommenders exert on users' behaviour may generate long-lasting and often unintended effects on human-AI ecosystems [131], such as amplifying political radicalisation processes [82], increasing CO2 emissions [36], and amplifying inequality, biases and discrimination [120]. The interaction between humans and recommenders has been examined in various fields using different nomenclatures, research methods and datasets, often producing incongruent findings.